Generating CP-nets with bounded tree width based on Dandelion code
LI Congcong, LIU Jinglei
Journal of Computer Applications    2021, 41 (1): 112-120.   DOI: 10.11772/j.issn.1001-9081.2020060972
Aiming at the high time complexity of reasoning over the Conditional Preference networks (CP-nets) graphical model, an algorithm for Generating CP-nets with Bounded Tree Width based on Dandelion code (BTW-CP-nets Gen) was proposed. First, through the bidirectional mapping between Dandelion codes and tree structures with tree width k (k-trees), decoding and encoding algorithms between Dandelion codes and k-trees were derived to realize a one-to-one mapping between codes and tree structures. Second, the k-tree was used to bound the tree width of the CP-nets structure, and the characteristic tree of the k-tree was used to obtain the directed acyclic graph structure of the CP-nets. Finally, the bijection of discrete multi-valued functions was used to compute the conditional preference table of each CP-nets node, and dominance query tests were executed on the generated bounded tree-width CP-nets. Theoretical analysis and experimental data show that, compared with the Prüfer-code-based k-tree generation algorithm, the BTW-CP-nets Gen algorithm reduces the running time for generating simple and complex structures by 21.1% and 30.5% respectively, and the node traversal ratio of the generated graph model in dominance queries is 18.48% and 29.03% higher on simple and complex structures respectively; the less time the BTW-CP-nets Gen algorithm consumes, the higher the node traversal ratio in dominance queries. It can be seen that the BTW-CP-nets Gen algorithm effectively improves the efficiency of graph-model reasoning.
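The Dandelion code, like the Prüfer code, is a bijection between fixed-length label strings and labeled trees. The decode direction of such code-tree mappings can be illustrated with the simpler Prüfer case (a sketch for intuition only, not the paper's Dandelion/k-tree decoder):

```python
import heapq

def prufer_decode(seq):
    """Decode a Prüfer sequence over labels 1..n into the edge list of
    the unique labeled tree on n = len(seq) + 2 nodes."""
    n = len(seq) + 2
    degree = [1] * (n + 1)                 # index 0 unused; every node starts as a leaf
    for x in seq:
        degree[x] += 1
    leaves = [i for i in range(1, n + 1) if degree[i] == 1]
    heapq.heapify(leaves)
    edges = []
    for x in seq:
        leaf = heapq.heappop(leaves)       # smallest current leaf
        edges.append((leaf, x))
        degree[x] -= 1
        if degree[x] == 1:
            heapq.heappush(leaves, x)
    # the two remaining degree-1 nodes form the last edge
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges
```

For example, the sequence [4, 4] decodes to the star on 4 nodes centered at node 4; encoding that tree gives the sequence back, which is the one-to-one property the abstract relies on.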
Optimal coalition structure generation in monotonous overlapping coalition
GUO Zhipeng, LIU Jinglei
Journal of Computer Applications    2021, 41 (1): 103-111.   DOI: 10.11772/j.issn.1001-9081.2020060973
Aiming at the difficulty of solving Overlapping Coalition Structure Generation (OCSG) in the framework of cooperative games with Overlapping Coalitions (OCF games), an effective algorithm based on the greedy method was proposed. First, OCF games with a constraint k on the number of coalitions (kOCF games) were used to limit the scale of the OCSG problem. Then, a similarity measure was introduced to represent the similarity between any two coalition structures, and the property of monotonicity was defined based on this measure: the higher the similarity between a coalition structure and the optimal coalition structure, the greater its monotonicity value. Finally, for kOCF games with monotonicity, the Coalition Constraints Greed (CCG) algorithm was designed, which inserts player numbers one by one to approximate the optimal coalition structure, and the complexity of the CCG algorithm was proved to be O(n^(2k+1)). The influences of different parameters and coalition value distributions on the performance of the proposed algorithm were analyzed and verified through experiments, and the algorithm was compared, in terms such as constraint conditions, with the algorithm proposed by Zick et al. (ZICK Y, CHALKIADAKIS G, ELKIND E, et al. Cooperative games with overlapping coalitions: charting the tractability frontier. Artificial Intelligence, 2019, 271: 74-97). The results show that when the maximum number of coalitions k is bounded by a constant, the number of searches of the proposed algorithm increases linearly with the number of agents. It can be seen that the CCG algorithm is fixed-parameter tractable in k and has good applicability.
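The greedy insertion idea can be sketched as follows, with a hypothetical coalition value function (`demo_value`) standing in for the game's characteristic function; unlike the paper's CCG, this toy version assigns each player to a single coalition rather than allowing overlaps:

```python
def greedy_coalition_structure(players, k, value):
    """Greedy sketch: insert players one by one into whichever of the k
    coalitions yields the largest marginal value (non-overlapping toy case)."""
    coalitions = [[] for _ in range(k)]
    for p in players:
        best, best_gain = 0, float("-inf")
        for i, c in enumerate(coalitions):
            gain = value(c + [p]) - value(c)
            if gain > best_gain:
                best, best_gain = i, gain
        coalitions[best].append(p)
    return coalitions

def demo_value(coalition):
    # hypothetical concave value: rewards size but penalizes crowding
    s = len(coalition)
    return s - 0.125 * s ** 2
```

With a concave value such as `demo_value`, the marginal gain shrinks as a coalition grows, so the greedy pass naturally balances coalition sizes.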
Brain network feature identification algorithm for Alzheimer's patients based on MRI image
ZHU Lin, YU Haitao, LEI Xinyu, LIU Jing, WANG Ruofan
Journal of Computer Applications    2020, 40 (8): 2455-2459.   DOI: 10.11772/j.issn.1001-9081.2019122105
In view of the subjectivity and frequent misdiagnosis in manual identification of Alzheimer's Disease (AD) from brain imaging, a method for automatic identification of AD by constructing a brain network from Magnetic Resonance Imaging (MRI) images was proposed. Firstly, MRI images were superimposed and divided into structural blocks, and the Structural SIMilarity (SSIM) between each pair of structural blocks was calculated to construct the network. Then, complex network theory was used to extract structural parameters, which were used as the input of a machine learning algorithm to realize automatic AD identification. The analysis found that classification performance was optimal with two parameters as input, in particular node betweenness and edge betweenness. Further study found that classification performance was optimal when the MRI image was divided into 27 structural blocks, with the accuracy of the weighted and unweighted networks reaching up to 91.04% and 94.51% respectively. The experimental results show that the structural-similarity complex network based on MRI block division can identify AD with high accuracy.
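The network-construction step can be sketched with the standard (global, non-windowed) SSIM formula, computing one similarity per block pair and thresholding it into a weighted adjacency matrix; the threshold value here is illustrative:

```python
import numpy as np

def ssim(x, y, c1=1e-4, c2=9e-4):
    """Global SSIM between two equally sized blocks (no sliding window)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def ssim_network(blocks, threshold=0.5):
    """Weighted adjacency matrix: link block pairs whose SSIM exceeds threshold."""
    n = len(blocks)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            s = ssim(blocks[i], blocks[j])
            if s > threshold:
                A[i, j] = A[j, i] = s
    return A
```

Graph measures such as node and edge betweenness can then be computed on `A` and fed to a classifier, as the abstract describes.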
Focused crawler method combining ontology and improved Tabu search for meteorological disaster
LIU Jingfa, GU Yaoping, LIU Wenjie
Journal of Computer Applications    2020, 40 (8): 2255-2261.   DOI: 10.11772/j.issn.1001-9081.2019122238
Considering that the traditional focused crawler easily falls into local optima and describes topics insufficiently, a focused crawler method combining Ontology and Improved Tabu Search (On-ITS) was proposed. First, the topic semantic vector was calculated by ontology-based semantic similarity, and the Web page text feature vector was constructed by position-weighting the text features of the HyperText Markup Language (HTML) page. Then, the vector space model was used to calculate the topic relevance of Web pages. On this basis, to analyze the comprehensive priority of a link, the topic relevance of the link's anchor text and the PageRank (PR) value of the page containing the link were calculated. In addition, to prevent the crawler from falling into local optima, an ITS-based focused crawler was designed to optimize the crawling queue. Experimental results on the topics of rainstorm disaster and typhoon disaster show that, under the same environment, the accuracy of the On-ITS method is higher than those of the contrast algorithms by at most 58% and at least 8%, and the proposed algorithm also performs very well on the other evaluation indicators. The On-ITS focused crawler method can effectively improve the accuracy of obtaining domain information and crawl more topic-related Web pages.
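The comprehensive link priority can be sketched as a weighted mix of parent-page relevance, anchor-text relevance and PageRank; the weights `alpha` and `beta` below are illustrative placeholders, not the paper's values:

```python
import math

def cosine(u, v):
    """Cosine similarity between sparse term-weight vectors stored as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def link_priority(topic_vec, anchor_vec, page_relevance, pagerank,
                  alpha=0.5, beta=0.3):
    """Comprehensive link priority: parent-page topic relevance, anchor-text
    relevance and a normalized PageRank score, mixed with illustrative weights."""
    return (alpha * page_relevance
            + beta * cosine(topic_vec, anchor_vec)
            + (1 - alpha - beta) * pagerank)
```

Links in the crawling queue are then ordered by this score, and the tabu-search step reorders the queue to escape locally optimal crawl paths.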
Object detection algorithm based on asymmetric hourglass network structure
LIU Ziwei, DENG Chunhua, LIU Jing
Journal of Computer Applications    2020, 40 (12): 3526-3533.   DOI: 10.11772/j.issn.1001-9081.2020050641
Anchor-free deep learning based object detection is a mainstream single-stage object detection approach. An hourglass network structure that incorporates multiple layers of supervisory information can significantly improve the accuracy of anchor-free object detection algorithms, but its speed is much lower than that of a common network at the same level, and the features of objects at different scales interfere with each other. To solve these problems, an object detection algorithm based on an asymmetric hourglass network structure was proposed. The proposed algorithm is not constrained by shape and size when fusing the features of different network layers, and can quickly and efficiently abstract the semantic information of the network, making it easier for the model to learn the differences between scales. For object detection at different scales, a multi-scale output hourglass network structure was designed to resolve the mutual interference between features of objects at different scales and to refine the output detection results. In addition, a non-maximum suppression algorithm specialized for multi-scale outputs was used to improve the recall rate of the detector. Experimental results show that the AP50 of the proposed algorithm on the Common Objects in COntext (COCO) dataset reaches 61.3%, which is 4.2 percentage points higher than that of the anchor-free network CenterNet. The proposed algorithm surpasses the original algorithm in the balance between accuracy and time, and is particularly suitable for real-time object detection in industry.
Fast spectral clustering algorithm without eigen-decomposition
LIU Jingshu, WANG Li, LIU Jinglei
Journal of Computer Applications    2020, 40 (12): 3413-3422.   DOI: 10.11772/j.issn.1001-9081.2020061040
The traditional spectral clustering algorithm needs too much time for eigen-decomposition when the number of samples is very large. To solve this problem, a fast spectral clustering algorithm without eigen-decomposition was proposed, which reduces the time overhead by multiplicative update iteration. Firstly, the Nyström algorithm was used for random sampling to establish the relationship between the sampled matrix and the original matrix. Then, the indicator matrix was updated iteratively by multiplicative update rules. Finally, the correctness and convergence of the designed algorithm were analyzed theoretically. The proposed algorithm was tested on five widely used real datasets and three synthetic datasets. Experimental results on the real datasets show that the average Normalized Mutual Information (NMI) of the proposed algorithm is 0.45, which is 12.5% higher than that of the k-means clustering algorithm; its computing time is 61.73 s, which is 61.13% less than that of the traditional spectral clustering algorithm; and its performance is superior to that of the hierarchical clustering algorithm, verifying the effectiveness of the proposed algorithm.
Blood pressure prediction with multi-factor cue long short-term memory model
LIU Jing, WU Yingfei, YUAN Zhenming, SUN Xiaoyan
Journal of Computer Applications    2019, 39 (5): 1551-1556.   DOI: 10.11772/j.issn.1001-9081.2018110008
Hypertension is a major health hazard, and blood pressure prediction is of great importance for avoiding the grave consequences caused by sudden increases in blood pressure. Based on the traditional Long Short-Term Memory (LSTM) network, a multi-factor cue LSTM model for both short-term prediction (predicting blood pressure for the next day) and long-term prediction (predicting blood pressure for the next several days) was proposed to provide early warning of undesirable changes in blood pressure. The multi-factor cues used in the blood pressure prediction model included time-series data cues (e.g. heart rate) and contextual information cues (e.g. age, Body Mass Index (BMI), gender, temperature). The change characteristics of the time-series data and the data features of other associated attributes were extracted for blood pressure prediction. Environmental factors were considered in blood pressure prediction for the first time, and multi-task learning was used to help the model capture the relations among the data and improve its generalization ability. The experimental results show that, compared with the traditional LSTM model and the LSTM with Contextual Layer (LSTM-CL) model, the proposed model decreases prediction error and prediction bias for diastolic blood pressure by 2.5%, 3.8% and 1.9%, 3.2% respectively, and reduces prediction error and prediction bias for systolic blood pressure by 0.2%, 0.1% and 0.6%, 0.3% respectively.
Fast scale adaptive object tracking algorithm with separating window
YANG Chunde, LIU Jing, QU Zhong
Journal of Computer Applications    2019, 39 (4): 1145-1149.   DOI: 10.11772/j.issn.1001-9081.2018081821
In order to solve the object drift caused by the Kernelized Correlation Filter (KCF) tracking algorithm under scale change, a Fast Scale Adaptive tracking algorithm based on Correlation Filter (FSACF) was proposed. Firstly, a global gradient combination feature map based on salient color features was obtained by extracting features directly from the original frame image, reducing the effect of the subsequent scale calculation on performance. Secondly, a separating-window method was applied to the global feature map, adaptively selecting the scale and calculating the corresponding maximum response value. Finally, a defined confidence function was used to adaptively update the iterative template function, improving the robustness of the model. Experimental results on video sets with different interference attributes show that, compared with the KCF algorithm, the accuracy of the FSACF algorithm was improved by 7.4 percentage points and the success rate was increased by 12.8 percentage points; compared with the algorithm without the global feature map and separating window, the frame rate was improved by a factor of 1.5. The experimental results show that the FSACF algorithm avoids object drift under scale change with good efficiency, and is superior to the comparison algorithms in accuracy and success rate.
Bus arrival time prediction system based on Spark and particle filter algorithm
LIU Jing, XIAO Guanfeng
Journal of Computer Applications    2019, 39 (2): 429-435.   DOI: 10.11772/j.issn.1001-9081.2018081800
To improve the accuracy of bus arrival time prediction, a Particle Filter (PF) algorithm with stream-computing characteristics was used to establish a bus arrival time prediction model. To address the prediction error and particle optimization problems that arise in applying the PF algorithm, the model was improved by introducing the latest bus speed and constructing observations, making its predictions closer to actual road conditions and enabling simultaneous prediction of the arrival times of multiple buses. Based on this model and the Spark platform, a real-time bus arrival time prediction software system was implemented. Compared with actual results, for the off-peak period the maximum absolute error was 207 s and the mean absolute error was 71.67 s; for the peak period, the maximum absolute error was 270 s and the mean absolute error was 87.61 s. The mean absolute error of the predicted results was within 2 min, a generally recognized ideal result. The experimental results show that the proposed model and the implemented system can accurately predict bus arrival time and meet passengers' actual demands.
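The predict-weight-resample cycle of a bootstrap particle filter, in a generic 1-D form with a hypothetical motion and observation model (the paper's observation construction from live bus speeds is richer than this sketch):

```python
import numpy as np

def pf_step(particles, weights, motion, likelihood, z, rng):
    """One predict / weight / resample cycle of a bootstrap particle filter."""
    particles = motion(particles, rng)                 # predict: move every particle
    weights = weights * likelihood(z, particles)       # weight by observation z
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)  # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(1)
# hypothetical model: bus advances ~10 m per step; GPS-like position fix, sigma 5 m
motion = lambda x, rng: x + 10.0 + rng.normal(0.0, 2.0, len(x))
likelihood = lambda z, x: np.exp(-0.5 * ((z - x) / 5.0) ** 2)

particles = rng.normal(0.0, 20.0, 1000)
weights = np.full(1000, 1.0 / 1000)
for t in range(1, 6):                                  # true position is 10 * t
    particles, weights = pf_step(particles, weights, motion, likelihood, 10.0 * t, rng)
est = particles.mean()
```

The posterior mean `est` tracks the true position; arrival time then follows from the estimated position and the remaining distance to the stop.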
Preference feature extraction based on Nyström method
YANG Meijiao, LIU Jinglei
Journal of Computer Applications    2018, 38 (9): 2515-2522.   DOI: 10.11772/j.issn.1001-9081.2018020296
To solve the problem of low feature extraction efficiency in movie rating, a Nyström method combined with QR decomposition was proposed. Firstly, sampling was performed using an adaptive method, QR decomposition of the internal matrix was performed, and the decomposed matrices were recombined with the internal matrix for feature decomposition. The approximation quality of the Nyström method depends closely on the number and choice of the selected landmark points, so the landmarks were selected to preserve similarity after sampling. Adaptive sampling ensures the accuracy of the approximation, while QR decomposition ensures the stability of the matrix and improves the accuracy of preference feature extraction; the more accurate the extracted preference features, the more stable the recommendation system and the more accurate the recommendations. Finally, a feature extraction experiment was performed on a dataset of actual audience ratings of movies, containing 480189 users and 17770 movies. The experimental results show that, when extracting the same number of landmark points, the accuracy and efficiency of the improved Nyström method are improved to a certain degree, with the time complexity reduced from the original O(n^3) to O(nc^2) (c << n); compared with the standard Nyström method, the error is kept below 25%.
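The core Nyström relation: sample c landmark columns C and the c x c intersection block W, then approximate the full kernel as K ≈ C W⁺ Cᵀ, which is what reduces the cost from O(n^3) to O(nc^2) (a plain sketch; the paper additionally stabilizes the inversion with QR and chooses landmarks adaptively):

```python
import numpy as np

def nystrom(K, idx):
    """Nyström approximation K ≈ C @ pinv(W) @ C.T from landmark indices."""
    C = K[:, idx]                  # n x c sampled columns
    W = K[np.ix_(idx, idx)]        # c x c intersection block
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
X = rng.random((50, 3))
K = X @ X.T                        # exactly rank-3 PSD kernel
K_hat = nystrom(K, [0, 1, 2, 3, 4])
```

For an exactly low-rank positive semi-definite kernel whose landmarks span its range, as in this toy example, the Nyström reconstruction is exact; on full-rank kernels it is an approximation whose quality depends on the landmark choice, which is why the paper's selection step matters.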
Stateful group rekeying scheme with tunable collusion resistance
AO Li, LIU Jing, YAO Shaowen, WU Nan
Journal of Computer Applications    2018, 38 (5): 1372-1376.   DOI: 10.11772/j.issn.1001-9081.2017102413
The Logical Key Hierarchy (LKH) protocol has been proved to meet the O(log n) lower bound on communication complexity when resisting complete collusion attacks. However, in some resource-constrained or commercial application environments, users still require communication overhead below O(log n). The Stateful Exclusive Complete Subtree (SECS) protocol has constant communication overhead, but it can only resist single-user attacks. Considering users' willingness to sacrifice some security to reduce communication overhead, a Hybrid Stateful Exclusive Complete Subtree (H-SECS) scheme was designed and implemented, based on the strict confidentiality of LKH combined with the constant communication overhead of SECS. The number of subgroups is configured by H-SECS according to the security level of the application scenario to make an optimal tradeoff between communication overhead and collusion resistance. Theoretical analysis and simulation results show that, compared with the LKH and SECS protocols, the communication overhead of H-SECS can be tuned in the range between O(1) and O(log n).
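The overhead gap that the tradeoff exploits is easy to quantify: rekeying after one departure costs roughly degree x tree-height messages in a balanced LKH key tree, versus one message per remaining member in a flat scheme (an illustrative count that ignores protocol details):

```python
def lkh_rekey_messages(n, degree=2):
    """Approximate rekey messages after one member leaves a balanced
    degree-ary LKH key tree: one new key per level of the leaving path,
    each encrypted for the `degree` children at that level."""
    depth, cap = 0, 1
    while cap < n:                 # integer computation of ceil(log_degree(n))
        cap *= degree
        depth += 1
    return degree * depth

def star_rekey_messages(n):
    """Flat (star) scheme: the new group key is sent to each remaining member."""
    return n - 1
```

For n = 1024 members the balanced binary tree needs about 20 messages versus 1023 for the flat scheme; H-SECS's subgroup count interpolates between these two regimes.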
Whole process optimized garbage collection for solid-state drives
FANG Caihua, LIU Jingning, TONG Wei, GAO Yang, LEI Xia, JIANG Yu
Journal of Computer Applications    2017, 37 (5): 1257-1262.   DOI: 10.11772/j.issn.1001-9081.2017.05.1257
Due to NAND flash's inherent restrictions, such as erase-before-write and a large erase unit, flash-based Solid-State Drives (SSD) require garbage collection operations to reclaim invalid physical pages. However, the high overhead caused by garbage collection significantly decreases the performance and lifetime of an SSD, and the degradation becomes even more serious when the SSD is heavily fragmented. Existing Garbage Collection (GC) algorithms only focus on some steps of the GC operation, and none of them provides a comprehensive solution that takes all the steps of the GC process into consideration. On the basis of a detailed analysis of the GC process, a whole-process optimized garbage collection algorithm named WPO-GC (Whole Process Optimized Garbage Collection) was proposed, which integrates optimizations on each step of GC so as to reduce the negative impact on normal read/write requests and the SSD's lifetime to the greatest extent. Moreover, WPO-GC was implemented on SSDsim, an open-source SSD simulator, to evaluate its efficiency. The experimental results show that, compared with a typical GC algorithm, the proposed algorithm can decrease read I/O response time by 20%-40% and write I/O response time by 17%-40%, and balance wear by nearly 30% to extend the lifetime.
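The first GC step, victim selection, is commonly greedy on the number of valid pages that must be migrated before erasing; a sketch with illustrative NAND timings (WPO-GC itself also optimizes the subsequent migration and erase steps):

```python
def pick_victim(blocks):
    """Greedy victim choice: fewest valid pages = least migration before erase.
    `blocks` maps block id -> list of page states ('valid' or 'invalid')."""
    return min(blocks, key=lambda b: sum(p == 'valid' for p in blocks[b]))

def gc_cost(blocks, victim, t_read=25, t_write=200, t_erase=1500):
    """Reclaim cost in microseconds (illustrative flash timings): each valid
    page is read and rewritten elsewhere, then the whole block is erased."""
    valid = sum(p == 'valid' for p in blocks[victim])
    return valid * (t_read + t_write) + t_erase
```

The cost model makes the tradeoff visible: a block with one valid page costs one read-write migration plus an erase, while a nearly full block multiplies the migration term and stalls foreground I/O for far longer.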
Improving feature selection and matrix recovery ability by CUR matrix decomposition
LEI Hengxin, LIU Jinglei
Journal of Computer Applications    2017, 37 (3): 640-646.   DOI: 10.11772/j.issn.1001-9081.2017.03.640
To solve the problems that users and products cannot be accurately selected in large datasets and that user behavior preferences cannot be predicted accurately, a new CUR (Column Union Row) matrix decomposition method was proposed. A small number of columns were selected from the original matrix to form the matrix C, and a small number of rows to form the matrix R; the matrix U was then constructed by QR decomposition. The matrices C and R are feature matrices of users and products respectively; being composed of real data, they reflect the detailed characteristics of both users and products. To predict users' behavioral preferences accurately, the CUR algorithm was improved to give greater stability and accuracy in matrix recovery. Finally, experiments on a real dataset (the Netflix dataset) indicate that, compared with traditional singular value decomposition, principal component analysis and other matrix decomposition methods, the CUR matrix decomposition algorithm has higher accuracy and better interpretability in feature selection; in matrix recovery, it also shows superior stability and accuracy, with a precision of over 90%. CUR matrix decomposition has great application value in recommender systems and traffic flow prediction.
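A minimal CUR sketch with uniform index selection and U = C⁺AR⁺ (the paper's improved variant selects rows and columns more carefully and builds U via QR); for a low-rank matrix whose sampled rows and columns span its row and column spaces, C U R recovers A exactly:

```python
import numpy as np

def cur(A, col_idx, row_idx):
    """CUR decomposition with U = pinv(C) @ A @ pinv(R), so that A ≈ C @ U @ R.
    C and R are actual columns/rows of A, which keeps the factors interpretable."""
    C = A[:, col_idx]
    R = A[row_idx, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

rng = np.random.default_rng(0)
A = rng.random((30, 2)) @ rng.random((2, 20))   # exactly rank-2 "user x item" matrix
C, U, R = cur(A, [0, 1, 2], [0, 1, 2])          # 3 columns and 3 rows suffice here
```

Unlike SVD factors, C and R are real columns and rows of the rating matrix, which is the interpretability advantage the abstract highlights.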
Conditional preference mining based on MaxClique
TAN Zheng, LIU JingLei, YU Hang
Journal of Computer Applications    2017, 37 (11): 3107-3114.   DOI: 10.11772/j.issn.1001-9081.2017.11.3107
In order to solve the problem that conditional constraints (context constraints) for personalized queries in databases were not fully considered, a constraint model was proposed in which the context i+ ≻ i- | X means that the user prefers i+ to i- under the constraint of context X. An association rule mining algorithm based on MaxClique was used to obtain user preferences, and a Conditional Preference Mining (CPM) algorithm combining context with the obtained preference rules was proposed. The experimental results show that the contextual preference mining model has strong preference expression ability. Meanwhile, under different settings of minimum support, minimum confidence and data scale, comparisons of the CPM algorithm with the Apriori algorithm and the CONTENUM algorithm show that CPM can obviously improve the efficiency of generating user preferences.
Improvement of sub-pixel morphological anti-aliasing algorithm
LIU Jingrong, DU Huimin, DU Qinqin
Journal of Computer Applications    2017, 37 (10): 2871-2874.   DOI: 10.11772/j.issn.1001-9081.2017.10.2871
Since the Sub-pixel Morphological Anti-Aliasing (SMAA) algorithm extracts relatively few contours from images and requires large storage, an improved SMAA algorithm was presented. In the improved algorithm, the product of a pixel's luminance and an adjustable factor is taken as a dynamic threshold for deciding whether the pixel is a boundary pixel. Compared with the fixed threshold used for boundary decision in SMAA, the dynamic threshold is stricter, so the presented algorithm can extract more boundaries. Based on an analysis of the different morphological patterns and the storage they use, some redundant storage was merged so as to reduce the memory size. The algorithm was implemented with the Microsoft DirectX SDK and HLSL under Windows 7. The experimental results show that the proposed algorithm extracts clearer boundaries and reduces the memory size by 51.93%.
Analysis algorithm of electroencephalogram signals for epilepsy diagnosis based on power spectral density and limited penetrable visibility graph
WANG Ruofan, LIU Jing, WANG Jiang, YU Haitao, CAO Yibin
Journal of Computer Applications    2017, 37 (1): 175-182.   DOI: 10.11772/j.issn.1001-9081.2017.01.0175
Focusing on the poor noise robustness of the Visibility Graph (VG) algorithm, an improved Limited Penetrable Visibility Graph (LPVG) algorithm was proposed. The LPVG algorithm maps time series into networks by connecting the points of the time series that satisfy the visibility criterion within a limited penetrable distance. Firstly, the performance of the LPVG algorithm was analyzed. Secondly, the LPVG algorithm was combined with Power Spectral Density (PSD) analysis and applied to the automatic identification of epileptic ElectroEncephaloGram (EEG) signals before, during and after seizures. Finally, the characteristic parameters of the LPVG network in the three states were extracted to study the influence of epileptic seizures on the network topology. The simulation results show that, compared with VG and the Horizontal Visibility Graph (HVG), LPVG has higher time complexity but strong robustness to noise in the signal: when mapping typical periodic, random, fractal and chaotic time series into networks by LPVG, the fluctuation rates of the clustering coefficient of the LPVG network were always the lowest as the noise intensity increased, at 6.73%, 0.05%, 0.99% and 3.20% respectively. The PSD and LPVG analysis showed that epileptic seizures greatly influence brain energy: PSD was obviously enhanced in the delta frequency band and significantly reduced in the theta frequency band, and the topological structure of the LPVG network changed during seizures, characterized by independently enhanced network modules, increased average path length and decreased graph index complexity. The PSD and LPVG analyses applied in this paper can serve as an effective measure for characterizing abnormalities in the energy distribution and topological structure of a single EEG channel, providing help for the pathological study and clinical diagnosis of epilepsy.
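The limited penetrable visibility criterion can be stated compactly: points i and j are linked if at most L intermediate points rise above the straight sight line between them; L = 0 recovers the ordinary VG, and L > 0 is what buys noise robustness, since a single noisy spike can no longer sever a link:

```python
def lpvg_edges(series, limit=1):
    """Edges of the limited penetrable visibility graph of a time series:
    connect i -- j if at most `limit` intermediate points reach or exceed
    the sight line between (i, series[i]) and (j, series[j])."""
    n = len(series)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            blocked = sum(
                1 for k in range(i + 1, j)
                if series[k] >= series[i]
                + (series[j] - series[i]) * (k - i) / (j - i)
            )
            if blocked <= limit:
                edges.append((i, j))
    return edges
```

On the sawtooth series [1, 3, 1, 3, 1], the pair (0, 2) is blocked by the spike at index 1 in the plain VG (limit 0) but becomes an edge once one penetration is allowed, illustrating the extra robustness.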
Fast content distribution method of integrating P2P technology in cloud platform
LIU Jing, ZHAO Wenju
Journal of Computer Applications    2017, 37 (1): 31-36.   DOI: 10.11772/j.issn.1001-9081.2017.01.0031
The HyperText Transfer Protocol (HTTP) is usually adopted for data transfer in the content distribution process of cloud storage services. When a large number of users request to download the same file from the cloud storage server in a short time, the bandwidth pressure on the cloud server becomes very large and downloads become very slow. Aiming at this problem, P2P technology was integrated into fast content distribution for a cloud platform, and a dynamic protocol conversion mechanism was proposed to achieve a faster and better content distribution process. Four protocol conversion metrics, including user type, service quality, time yield and bandwidth gains, were selected, and the OpenStack cloud platform was used to realize the proposed protocol conversion method. The experimental results show that, compared with pure HTTP or pure P2P downloading, the proposed method guarantees shorter download times for client users and effectively saves the service provider's bandwidth when there are many P2P clients.
Mining Ceteris Paribus preference from preference database
XIN Guanlin, LIU Jinglei
Journal of Computer Applications    2016, 36 (8): 2092-2098.   DOI: 10.11772/j.issn.1001-9081.2016.08.2092
Focusing on the issue that a traditional recommendation system requires users to give an explicit preference matrix (U-I matrix) before automation technology can capture user preferences, a method for mining Agent preference information from a preference database was introduced. From the perspective of knowledge discovery, a k-order preference mining algorithm named kPreM was proposed based on Ceteris Paribus rules (CP rules). The k-order CP rules were used to prune the information in the preference database, which decreases the number of database scans and increases the efficiency of preference mining. Then the general graphical model CP-nets (Conditional Preference networks) was used as a tool to reveal that user preferences can be approximated by a corresponding CP-net. The theoretical analysis and simulation results show that user preferences are conditional preferences. In addition, the excavation of the CP-nets preference model provides a theoretical basis for designing personalized recommendation systems.
Estimation algorithm of switching speech power spectrum for automatic speech recognition system
LIU Jingang, ZHOU Yi, MA Yongbao, LIU Hongqing
Journal of Computer Applications    2016, 36 (12): 3369-3373.   DOI: 10.11772/j.issn.1001-9081.2016.12.3369
In order to solve the poor robustness of Automatic Speech Recognition (ASR) systems in noisy environments, a new switching speech power spectrum estimation algorithm was proposed. Firstly, based on the assumption that the speech spectral amplitude is better modeled by a Chi distribution, a modified Minimum Mean Square Error (MMSE) estimator of the speech power spectrum was derived. Then, incorporating the Speech Presence Probability (SPP), a new MMSE estimator based on SPP was obtained. Next, the new estimator was combined with the conventional Wiener filter into a switching algorithm: in heavy noise, the modified MMSE estimator was used to estimate the clean speech power spectrum; otherwise, the Wiener filter was employed to reduce the amount of calculation. This yields the final switching speech power spectrum estimation algorithm for ASR systems. The experimental results show that, compared with the traditional MMSE estimator with a Rayleigh prior, the recognition accuracy of the proposed algorithm was improved by an average of 8 percentage points in various noise environments. The proposed algorithm improves the robustness of the ASR system by removing noise, and reduces the computational cost.
Fault tolerance as a service method in cloud platform based on virtual machine deployment policy
LIU Xiaoxia, LIU Jing
Journal of Computer Applications    2015, 35 (12): 3530-3535.   DOI: 10.11772/j.issn.1001-9081.2015.12.3530
Concerning the problem of how to make full use of the resources in a cloud infrastructure to satisfy the various highly reliable fault-tolerance requirements of cloud application systems and their tenants, a fault-tolerance-as-a-service method for cloud platforms, oriented to cloud application tenants and service providers, was proposed based on a virtual machine deployment policy. According to the specific fault-tolerance requirements of cloud application tenants, suitable fault-tolerance methods with corresponding fault-tolerance levels are adopted. Then, the revenue and resource usage of the service provider are computed and optimized. Based on this analysis, the virtual machines providing fault-tolerance services are deployed so as to make full use of virtual-machine-level resources and provide more reliable fault-tolerance services for cloud application systems and their tenants. The experimental results show that the proposed method can guarantee the revenue of service providers and achieve more flexible and more reliable fault-tolerance services for multi-tenant cloud application systems.
Game-theoretic model of active attack on steganographic system
LIU Jing, TANG Guangming
Journal of Computer Applications    2014, 34 (3): 720-723.   DOI: 10.11772/j.issn.1001-9081.2014.03.0720

To address the problem of active attacks on steganographic systems, the adversarial relationship between the steganographer and the active attacker was modeled. A steganographic game with embedding rate and error rate as the payoff function was proposed. Using the basic theory of two-person finite zero-sum games, the equilibrium between the steganographer and the active attacker was analyzed, and a method to obtain their equilibrium strategies was given. An example case was then solved to demonstrate the ideas presented in the model. This model not only provides a theoretical basis for the steganographer and the active attacker to determine their optimal strategies, but also offers guidance for designing steganographic algorithms robust to active attacks.
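For a two-person zero-sum game with two strategies per player, the mixed equilibrium has a standard closed form; the sketch below solves a generic 2x2 game (the payoff matrix is illustrative, not the paper's embedding-rate/error-rate payoffs, and it assumes no saddle point so both players mix):

```python
def solve_2x2_zero_sum(payoff):
    """Mixed-strategy equilibrium of a 2x2 zero-sum game (row player maximizes).

    payoff[i][j] is the row player's payoff. Assuming no saddle point,
    each player equalizes the opponent's expected payoffs across pure
    strategies, which yields the closed form below.
    """
    (a, b), (c, d) = payoff
    denom = a - b - c + d
    p = (d - c) / denom          # probability the row player plays row 0
    q = (d - b) / denom          # probability the column player plays column 0
    value = (a * d - b * c) / denom
    return p, q, value
```

For matching pennies, `[[1, -1], [-1, 1]]`, this gives the familiar half-half mixing with game value zero.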

Design of campus network video broadcast based on cloud computing
LIU Jing
Journal of Computer Applications    2014, 34 (2): 572-575.  
This article analyzed the structure of traditional campus network live video technologies and proposed a live video design method based on cloud computing to overcome their shortcomings. The article began with a description of the overall live video architecture in a cloud computing environment, then made a thorough analysis of the related key issues. Finally, a test was carried out on a real campus network. The method adopts a multi-layer design, making it flexible and expandable. It substantially saves bandwidth on the main campus network as well as physical computing resources, and it reduces the difficulty of operating and maintaining the live video platform.
Mouse behavior recognition based on human computation
LIU Jing, DENG Shasha, TONG Jing, CHEN Zhengming
Journal of Computer Applications    2014, 34 (2): 533-537.  
Mouse behaviors cannot be accurately recognized by existing computer-based automatic analysis systems, and the ground truth is generally obtained from experts' annotations on a massive number of video frames, in which subjective misjudgments are to some extent unavoidable. To solve these problems, a mouse behavior recognition method based on human computation was proposed. Exploiting the superiority of human visual perception and the decentralization and cooperation of the Internet, human brains were treated as processors in a distributed system. Firstly, mouse behavior frames were randomly distributed to online individuals, and each behavior frame was classified by a large number of them. Secondly, all the effective classifications from the online individuals were collected, analyzed and processed by the computer system, yielding the final mouse behavior classification based on these frame sequences. The experimental results show that the proposed method effectively improves the correct recognition rate of mouse behaviors at limited cost.
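The aggregation step, collecting crowd classifications per frame and resolving them into one label, can be sketched as a simple majority vote; the minimum-vote rule and tie handling below are illustrative assumptions, not the paper's exact aggregation scheme:

```python
from collections import Counter

def aggregate_crowd_labels(labels_per_frame, min_votes=3):
    """Majority-vote the behavior labels collected for each frame; frames
    with too few votes or a tie are left unresolved (None)."""
    results = []
    for votes in labels_per_frame:
        if len(votes) < min_votes:
            results.append(None)
            continue
        counts = Counter(votes).most_common(2)
        if len(counts) > 1 and counts[0][1] == counts[1][1]:
            results.append(None)          # tie: no consensus yet
        else:
            results.append(counts[0][0])
    return results
```

Unresolved frames would simply be redistributed to more online individuals until consensus emerges.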
Distortion correction technique for airborne large-field-of-view lens
XU Fang, LIU Jinghong
Journal of Computer Applications    2013, 33 (09): 2623-2626.   DOI: 10.11772/j.issn.1001-9081.2013.09.2623
To correct the distortion of the large-field-of-view lenses used on aerial cameras, this paper proposed a method using the Matlab calib_toolbox. By calibrating images captured over a range of angles of view and distances, the internal parameters and distortion coefficients of the camera were obtained, and a correct mathematical model of the distortion was built. The proposed method improved on the Bouguet method and, through subsequent programming, extended its application to distortion correction of color images from airborne cameras. Furthermore, a new and efficient backstepping-reconstruction pattern matching method for image distortion rate analysis was proposed, which quantifies the level of distortion. The simulation results show that the proposed method reduces the distortion rate of color images by about 10% on average. All the experimental results indicate that the proposed method is simple and efficient, and can readily be ported to a hardware platform for real-time image distortion correction.
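The distortion model behind Bouguet-style calibration is the radial-plus-tangential (Brown) model on normalized image coordinates; a minimal sketch of the forward model and its fixed-point inversion follows, with the coefficient values in the test being arbitrary illustrations:

```python
def distort_point(x, y, k1, k2, p1, p2):
    """Apply the radial + tangential (Brown/Bouguet) distortion model to a
    normalized image point (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

def undistort_point(x_d, y_d, k1, k2, p1, p2, iterations=20):
    """Invert the model by fixed-point iteration (the usual approach, since
    the forward model has no closed-form inverse)."""
    x, y = x_d, y_d
    for _ in range(iterations):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2
        dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        x = (x_d - dx) / radial
        y = (y_d - dy) / radial
    return x, y
```

Full image correction then maps every pixel through `undistort_point` (after normalizing by the calibrated intrinsics) and resamples, which is the part suited to a hardware implementation.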
Color image denoising based on Gaussian weighting and manifold for high fidelity
CHEN Zhongqiu, SHI Rui, LIU Jingmiao
Journal of Computer Applications    2013, 33 (09): 2588-2591.   DOI: 10.11772/j.issn.1001-9081.2013.09.2588
Vector-based methods for color image denoising have high computational complexity and cannot achieve real-time performance. A high-fidelity color image denoising method was therefore proposed based on Gaussian weighting and adaptive manifolds. Firstly, the non-local means algorithm was used to obtain high-dimensional data, and an improved Gaussian kernel was used to compute the weights for the color image. Secondly, a splatting step processed the high-dimensional data, performing a Gaussian distance-weighted projection of the colors of all pixels onto each adaptive manifold. Thirdly, smoothing and dimensionality reduction were performed on the manifolds, with an iterative method used for image smoothing. Finally, the filter response for each pixel was computed by interpolating the blurred values gathered from all adaptive manifolds. The experimental results show that the algorithm achieves better denoising performance than the original one while also improving real-time performance; details are well preserved, the Peak Signal-to-Noise Ratio (PSNR) improves by nearly 2.0 dB, and the Structural Similarity Index Measurement (SSIM) improves by more than 1%.
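The Gaussian-weighted non-local averaging at the core of the method can be illustrated on a 1-D signal (the adaptive-manifold acceleration and color handling are omitted; patch and search radii and sigma are illustrative):

```python
import math

def nlm_denoise(signal, patch_radius=1, search_radius=3, sigma=0.5):
    """Non-local means on a 1-D signal: each sample becomes a Gaussian-weighted
    average of nearby samples whose neighborhoods look similar."""
    n = len(signal)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search_radius), min(n, i + search_radius + 1)):
            # squared distance between the patches around i and j (clamped edges)
            dist2 = 0.0
            for t in range(-patch_radius, patch_radius + 1):
                a = signal[min(max(i + t, 0), n - 1)]
                b = signal[min(max(j + t, 0), n - 1)]
                dist2 += (a - b) ** 2
            w = math.exp(-dist2 / (2.0 * sigma * sigma))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

The adaptive-manifold scheme reaches the same kind of result by splatting pixels onto a few manifolds, blurring there, and slicing back, which is what makes it fast enough for real time.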
Improved artificial fish swarm algorithm based on social learning mechanism
ZHENG Yanbin, LIU Jingjing, WANG Ning
Journal of Computer Applications    2013, 33 (05): 1305-1329.   DOI: 10.3724/SP.J.1087.2013.01305
The Artificial Fish Swarm Algorithm (AFSA) suffers from low search speed and difficulty in obtaining accurate solutions. To solve these problems, an improved algorithm based on a social learning mechanism was proposed. In the later optimization period, convergence and divergence behaviors were used to improve the algorithm: both behaviors offer fast search speed and high optimization accuracy, while the divergence behavior additionally enhances population diversity and the ability to escape local extrema. The improved algorithm thus enhances the search performance to a certain extent. The experimental results show that the proposed algorithm is feasible and effective.
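A minimal 1-D sketch of the two behaviors, assuming the simplest possible forms (move toward the best-so-far position; move away from the swarm center with a little noise); the step sizes are illustrative, not the paper's parameters:

```python
import random

def converge(positions, best, step=0.5):
    """Convergence behavior: each fish moves toward the current best position."""
    return [x + step * (best - x) for x in positions]

def diverge(positions, step=0.3, rng=random):
    """Divergence behavior: fish move away from the swarm center (plus a small
    random perturbation) to restore diversity and escape local extrema."""
    center = sum(positions) / len(positions)
    return [x + step * (x - center) + rng.uniform(-0.01, 0.01) for x in positions]
```

In the improved AFSA these behaviors would be triggered in the later phase of the search, alternating exploitation (convergence) with renewed exploration (divergence).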
Hand gesture recognition based on bag of features and support vector machine
ZHANG Qiu-yu, WANG Dao-dong, ZHANG Mo-yi, LIU Jing-man
Journal of Computer Applications    2012, 32 (12): 3392-3396.   DOI: 10.3724/SP.J.1087.2012.03392
Under the influence of skin-like colors or complex backgrounds, it is hard to obtain a precise gesture contour by hand gesture segmentation, which degrades the subsequent gesture recognition rate and real-time interaction. Therefore, this paper proposed a gesture recognition method based on BOF-SVM (Bag Of Features-Support Vector Machine). First, local invariant features of the gesture images were extracted by the Scale Invariant Feature Transform (SIFT) algorithm. Then a visual codebook was generated from the local gesture feature vectors (SIFT descriptors) through K-means clustering, and the visual word set of every image was quantized against the codebook. As a result, fixed-dimensional feature vectors of the gesture images were obtained to train a multi-class SVM classifier. The method only needs a bounding box around the gesture area rather than an accurate gesture segmentation. The experimental results indicate that the average recognition rate of the nine interactive hand gestures with this method reaches 92.1%; it is also robust and efficient, and adapts well to environmental changes.
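The quantization step, turning a variable number of local descriptors into one fixed-dimensional vector, is the heart of the bag-of-features representation; a sketch with toy 2-D descriptors follows (real SIFT descriptors are 128-dimensional and the codebook comes from K-means):

```python
def bof_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and return
    the L1-normalized bag-of-features histogram."""
    hist = [0] * len(codebook)
    for d in descriptors:
        nearest = min(range(len(codebook)),
                      key=lambda k: sum((a - b) ** 2 for a, b in zip(d, codebook[k])))
        hist[nearest] += 1
    total = float(len(descriptors))
    return [h / total for h in hist]
```

These histograms, one per training image, are what the multi-class SVM is trained on.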
Research on decentralized communication decision in multi-Agent system
ZHENG Yan-bin, GUO Ling-yun, LIU Jing-jing
Journal of Computer Applications    2012, 32 (10): 2875-2878.   DOI: 10.3724/SP.J.1087.2012.02875
Communication is the most effective and direct method of coordination and cooperation among multiple Agents, but its cost restricts its use. To reduce the communication traffic involved in coordinating a Multi-Agent System (MAS), this paper put forward a heuristic algorithm that lets Agents choose to communicate only those observations that benefit team performance. The experimental results show that communicating only beneficial observations can ensure efficient use of the limited communication bandwidth and improve system performance.
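One common way to express such a heuristic is a value-of-information test: send an observation only when the expected team-utility gain from sharing it exceeds the message cost. The sketch below assumes those utilities are available as numbers, which is a simplification of whatever estimate the paper's heuristic uses:

```python
def should_communicate(utility_with, utility_without, message_cost):
    """Communicate an observation only when its expected value to the team
    exceeds the bandwidth cost of sending it."""
    return (utility_with - utility_without) > message_cost

def filter_observations(observations, message_cost):
    """Keep only the observations worth communicating.

    Each observation is a tuple (name, utility_with, utility_without)."""
    return [name for name, u_w, u_wo in observations
            if should_communicate(u_w, u_wo, message_cost)]
```

Under a bandwidth budget, the same scores could also be used to rank observations and send only the top few.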
Performance analysis of outage probability and diversity-multiplexing tradeoff for two-path relaying cooperative communications
ZHAO Yong-chi, LIU Jing-xia, LI En-yu
Journal of Computer Applications    2012, 32 (09): 2436-2440.   DOI: 10.3724/SP.J.1087.2012.02436
To improve the spectral efficiency and outage performance of cooperative communication systems, a Two-Path Relaying (TPR) cooperative communications model based on the Decode-and-Forward (DF) protocol was proposed, along with two data transmission modes for the proposed system. Closed-form expressions for the outage probability and the relation between diversity gain and multiplexing gain were derived. The simulation results show that, compared with the conventional two-relay DF protocol, the outage probability performance of the two proposed transmission modes is greatly improved.
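For context on what such closed-form outage expressions look like, the simplest case is a single Rayleigh-fading link, where the received SNR is exponentially distributed and P_out = P(log2(1 + SNR) < R) = 1 - exp(-(2^R - 1)/mean_snr). The sketch below checks that closed form against Monte Carlo; it is a generic single-link illustration, not the paper's TPR derivation:

```python
import math
import random

def outage_closed_form(rate_bps_hz, mean_snr):
    """P(log2(1 + SNR) < R) for a Rayleigh link, SNR ~ Exponential(mean_snr)."""
    threshold = 2.0 ** rate_bps_hz - 1.0
    return 1.0 - math.exp(-threshold / mean_snr)

def outage_monte_carlo(rate_bps_hz, mean_snr, trials=50_000, seed=1):
    """Estimate the same probability by sampling exponential SNR draws."""
    rng = random.Random(seed)
    threshold = 2.0 ** rate_bps_hz - 1.0
    outages = sum(rng.expovariate(1.0 / mean_snr) < threshold
                  for _ in range(trials))
    return outages / trials
```

The TPR analysis combines several such link-outage terms according to which path decodes successfully, which is what produces the derived diversity-multiplexing tradeoff.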
Analysis on stability of continuous chaotic systems
LIU Jing-lin, FENG Ming-ku
Journal of Computer Applications    2012, 32 (06): 1640-1642.   DOI: 10.3724/SP.J.1087.2012.01640
The notion of k-error exhaustive entropy was proposed, based on the exhaustive entropy used to measure the strength of the random-like properties of chaotic sequences, and two of its basic properties were proved. The method was then used to analyze the stability of the random-like properties of three common continuous chaotic systems: the Lorenz system, the Rössler system, and Chua's system. Simulation results show that the approach reflects the random essence of continuous chaotic systems, and that Chua's system is a better source of randomness than the Lorenz and Rössler systems.
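The k-error exhaustive entropy is the paper's own construction, but the general idea of measuring random-like behavior via pattern statistics can be illustrated with a plain block entropy of a quantized sequence (block length m = 3 here is an arbitrary choice):

```python
import math
from collections import Counter

def block_entropy(bits, m=3):
    """Shannon entropy (in bits) of the empirical distribution of overlapping
    length-m patterns; higher values mean more random-looking sequences."""
    if len(bits) < m:
        return 0.0
    patterns = [tuple(bits[i:i + m]) for i in range(len(bits) - m + 1)]
    counts = Counter(patterns)
    n = float(len(patterns))
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A k-error variant would additionally ask how much this value can drop when up to k symbols of the sequence are altered, which is what makes it a stability measure.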